Solving the same-different task with convolutional neural networks
Authors
Abstract
Deep learning has demonstrated major abilities in solving many kinds of real-world problems in the computer vision literature. However, deep networks are still strained by simple reasoning tasks that humans consider easy to solve. In this work, we probe current state-of-the-art convolutional neural networks on a difficult set of tasks known as the same-different problems. All of these problems require the same prerequisite to be solved correctly: understanding whether two random shapes inside the same image are the same or not. With the experiments carried out in this work, we demonstrate that residual connections, and more generally skip connections, seem to have only a marginal impact on the learning of the proposed problems. In particular, we experiment with DenseNets, and we examine the contribution of residual and recurrent connections in already-tested architectures, ResNet-18 and CorNet-S respectively. Our experiments show that older feed-forward networks, AlexNet and VGG, are almost unable to learn the proposed problems, except in some specific scenarios. We show that recently introduced architectures can converge even in cases where important parts of their architecture are removed. We finally carry out zero-shot generalization tests, and discover that in these scenarios residual and recurrent connections can have a stronger impact on overall test accuracy. On four problems from the SVRT dataset, we reach state-of-the-art results with respect to previous approaches, obtaining super-human performances on three of them.
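The same-different task described above can be illustrated with a minimal synthetic data generator: two small random binary shapes are pasted into one image, and the label says whether they are identical. This is only a toy sketch for intuition; the actual SVRT problem generators used in the paper are considerably more elaborate.

```python
import numpy as np

def make_same_different(size=64, shape_px=8, same=True, rng=None):
    """Toy same-different sample: paste two random binary shapes into
    one image, label 1 if they are the same shape, 0 otherwise.
    (Illustrative sketch only -- not the actual SVRT generator.)"""
    rng = rng or np.random.default_rng()
    img = np.zeros((size, size), dtype=np.uint8)
    shape_a = (rng.random((shape_px, shape_px)) > 0.5).astype(np.uint8)
    if same:
        shape_b = shape_a
    else:
        # a fresh random patch; a chance collision with shape_a is negligible
        shape_b = (rng.random((shape_px, shape_px)) > 0.5).astype(np.uint8)
    # place one shape in each horizontal half so the two never overlap
    for shape, x0 in ((shape_a, 0), (shape_b, size // 2)):
        x = x0 + rng.integers(0, size // 2 - shape_px)
        y = rng.integers(0, size - shape_px)
        img[y:y + shape_px, x:x + shape_px] = shape
    return img, int(same)
```

A classifier for this task (e.g. a ResNet-18 with a two-way output head, as tested in the paper) would then be trained on batches of such image/label pairs.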
Similar articles
Cystoscopy Image Classification Using Deep Convolutional Neural Networks
In the past three decades, the use of smart methods in medical diagnostic systems has attracted the attention of many researchers. However, no smart activity has been provided in the field of medical image processing for diagnosis of bladder cancer through cystoscopy images despite the high prevalence in the world. In this paper, two well-known convolutional neural networks (CNNs) ...
Convergent Learning: Do different neural networks learn the same representations?
Recent successes in training large, deep neural networks (DNNs) have prompted active investigation into the underlying representations learned on their intermediate layers. Such research is difficult because it requires making sense of non-linear computations performed by millions of learned parameters. However, despite the difficulty, such research is valuable because it increases our ability ...
Convolutional Neural Networks
Several recent works have empirically observed that Convolutional Neural Nets (CNNs) are (approximately) invertible. To understand this approximate invertibility phenomenon and how to leverage it more effectively, we focus on a theoretical explanation and develop a mathematical model of sparse signal recovery that is consistent with CNNs with random weights. We give an exact connection to a par...
Tiled convolutional neural networks
Convolutional neural networks (CNNs) have been successfully applied to many tasks such as digit and object recognition. Using convolutional (tied) weights significantly reduces the number of parameters that have to be learned, and also allows translational invariance to be hard-coded into the architecture. In this paper, we consider the problem of learning invariances, rather than relying on ha...
Understanding Convolutional Neural Networks
This seminar paper focusses on convolutional neural networks and a visualization technique allowing further insights into their internal operation. After giving a brief introduction to neural networks and the multilayer perceptron, we review both supervised and unsupervised training of neural networks in detail. In addition, we discuss several approaches to regularization. The second section in...
Journal
Journal title: Pattern Recognition Letters
Year: 2021
ISSN: 1872-7344, 0167-8655
DOI: https://doi.org/10.1016/j.patrec.2020.12.019